
    Automated Extraction of Fire Line Parameters from Multispectral Infrared Images

    Remotely sensed infrared images are often used to assess wildland fire conditions. Separately, fire propagation models are in use to forecast future conditions. In the Dynamic Data Driven Application System (DDDAS) concept, the fire propagation model will react to the image data, which should produce more accurate predictions of fire propagation. In this study we describe a series of image processing tools that can be used to extract fire propagation parameters from multispectral infrared images so that the parameters can be used to drive a fire propagation model built upon the DDDAS concept. The method is capable of automatically determining the fire perimeter, active fire line, and fire propagation direction. A multi-band image gradient calculation, the Normalized Difference Vegetation Index, and the Normalized Difference Burn Ratio, along with several standard image processing techniques, are used to identify and constrain the fire propagation parameters. These fire propagation parameters can potentially be used within the DDDAS modeling framework for model update and adjustment.
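    The spectral indices named in this abstract reduce to simple band arithmetic. The sketch below is a minimal illustration, assuming co-registered NIR, red, and SWIR reflectance bands held as NumPy arrays; the random stand-in data, the band choices, and the 0.1 burn threshold are assumptions for demonstration, not the paper's actual inputs or parameters.

```python
import numpy as np
from scipy import ndimage

def normalized_difference(a, b, eps=1e-10):
    """Generic normalized difference index: (a - b) / (a + b)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return (a - b) / (a + b + eps)

# Stand-in multispectral bands; in practice these would be co-registered
# NIR, red, and SWIR reflectance arrays from the infrared imager.
rng = np.random.default_rng(0)
nir, red, swir = (rng.random((256, 256)) for _ in range(3))

ndvi = normalized_difference(nir, red)    # vegetation index
nbr = normalized_difference(nir, swir)    # burn ratio; low values suggest burned area
burned_mask = nbr < 0.1                   # illustrative threshold only

# Multi-band gradient magnitude (per-band Sobel, maximum across bands) as a
# rough way to highlight the sharp radiance change along the active fire line.
bands = np.stack([nir, red, swir])
gradient = np.max(
    [np.hypot(ndimage.sobel(b, axis=0), ndimage.sobel(b, axis=1)) for b in bands],
    axis=0,
)
```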

    An Approach for Combining Airborne LiDAR and High-Resolution Aerial Color Imagery using Gaussian Processes

    Changes in vegetation cover, building construction, road networks and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increasing use of multi-sensor remote sensing systems, researchers can obtain a more complete description of the scene of interest, and by utilizing multi-sensor data, the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic results and makes predictions in a way that accounts for real-world uncertainty. We attempt to identify man-made and natural objects in urban areas, including buildings, roads, trees, grass, water and vehicles. LiDAR features are derived from the 3D point clouds, and spatial and color features are extracted from the RGB images. For classification, we use the Laplace approximation for GP binary classification on the combined feature space, and multiclass classification is implemented with a one-vs-all binary classification strategy. Results from support vector machine (SVM) and logistic regression (LR) classifiers are also provided for comparison. Our experiments show a clear improvement in classification results when the two sensors are combined rather than used separately. We also find that the GP approach handles uncertainty in the classification results without compromising accuracy relative to the SVM, which is widely regarded as a state-of-the-art classifier.
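    As a rough illustration of the classification setup described above, the sketch below uses scikit-learn's GaussianProcessClassifier, which applies the Laplace approximation for binary GP classification and a one-vs-rest scheme for multiple classes, matching the strategy in the abstract. The synthetic feature arrays, class count, and RBF kernel are placeholders, not the paper's actual features or settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

# Hypothetical per-segment features: LiDAR-derived attributes (e.g. height,
# roughness) concatenated with spatial/color features from the RGB image.
rng = np.random.default_rng(0)
X_lidar = rng.normal(size=(500, 4))
X_color = rng.normal(size=(500, 6))
X = np.hstack([X_lidar, X_color])          # combined feature space
y = rng.integers(0, 6, size=500)           # 6 classes: building, road, tree, ...

# GP classifier: Laplace approximation internally, one-vs-rest for multiclass.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), multi_class="one_vs_rest")
gpc.fit(X[:400], y[:400])
proba = gpc.predict_proba(X[400:])         # probabilistic class memberships

# Baseline classifiers, mirroring the comparison in the abstract.
svm = SVC().fit(X[:400], y[:400])
lr = LogisticRegression(max_iter=1000).fit(X[:400], y[:400])

print("GP :", accuracy_score(y[400:], gpc.predict(X[400:])))
print("SVM:", accuracy_score(y[400:], svm.predict(X[400:])))
print("LR :", accuracy_score(y[400:], lr.predict(X[400:])))
```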

    Transfer Learning for High Resolution Aerial Image Classification

    With rapid developments in satellite and sensor technologies, an increasing number of high-spatial-resolution aerial images has become available. Classification of these images is important for many remote sensing image understanding tasks, such as image retrieval and object detection. Meanwhile, image classification in computer vision has been revolutionized by the recent popularity of convolutional neural networks (CNNs), on which the current state-of-the-art classification results are based. The idea of applying CNNs to high resolution aerial image classification is therefore straightforward, but it is not trivial, mainly because the number of labeled remote sensing images available for training a deep neural network is limited. As a result, transfer learning techniques have been adopted for this problem, in which the CNN used for classification is pre-trained on a larger dataset beforehand. In this paper, we propose a specific fine-tuning strategy that yields better CNN models for aerial image classification. Extensive experiments were carried out with the proposed approach using different CNN architectures. Our method shows competitive results compared to existing approaches, indicating the effectiveness of the proposed fine-tuning algorithm.
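    The transfer-learning idea (pre-train on a large dataset, then fine-tune on the smaller labeled aerial dataset) can be sketched with PyTorch and torchvision as below. The ResNet-50 backbone, the 21-class head, and the "freeze everything except the last block" scheme are illustrative assumptions; the paper's specific fine-tuning strategy is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a CNN pre-trained on a large dataset (ImageNet here) and replace the
# classifier head with one sized for the aerial-image classes.
num_classes = 21                      # assumed class count for illustration
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Freeze early layers; fine-tune the last residual block and the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9,
)
criterion = nn.CrossEntropyLoss()
# A standard training loop over the labeled aerial images would follow.
```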

    Automated Algorithm for the Identification of Artifacts in Mottled and Noisy Images

    We describe a method for automatically classifying image-quality defects on printed documents. The proposed approach accepts a scanned image in which the defect has been localized a priori and performs several image processing steps to reveal the region of interest. A mask is then created from the exposed region to identify bright outliers, and morphological reconstruction techniques are applied to emphasize relevant local attributes. The classification of the defects is accomplished via a customized tree classifier that uses size or shape attributes at corresponding nodes to yield binary decisions. Applications of this process include timely automated or assisted diagnosis and repair of printers and copiers in the field. The proposed technique was tested on a database of 276 images of synthetic and real-life defects, achieving 94.95% accuracy.
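    A rough sketch of the mask-and-reconstruction stage is given below using scikit-image. The synthetic defect blob, the 2-sigma outlier threshold, and the particular shape attributes are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np
from skimage import measure, morphology

# Synthetic stand-in for the scanned region containing the localized defect;
# a real input would be a grayscale crop of the printed page.
rng = np.random.default_rng(1)
region = rng.normal(0.5, 0.05, size=(128, 128))
region[40:60, 40:60] += 0.4          # synthetic bright defect blob

# Bright-outlier mask: pixels well above the background level.
# The 2-sigma offset is illustrative only.
mask = region > region.mean() + 2 * region.std()

# Morphological reconstruction by dilation to emphasize the connected bright
# structure belonging to the defect while suppressing isolated noise.
seed = np.copy(region)
seed[~mask] = region.min()
reconstructed = morphology.reconstruction(seed, region, method="dilation")

# Size/shape attributes of the kind a tree classifier could branch on at each
# node (thresholds would be tuned per defect type).
labels = measure.label(mask)
features = [(p.area, p.eccentricity, p.solidity) for p in measure.regionprops(labels)]
```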

    Survey of contemporary trends in color image segmentation


    Low-dimensional Representations of Hyperspectral Data for Use in CRF-based Classification

    Probabilistic graphical models have strong potential for use in hyperspectral image classification. One important class of probabilistic graphical models is the Conditional Random Field (CRF), which has distinct advantages over traditional Markov Random Fields (MRFs): no independence assumption is made over the observations, and local and pairwise potential features can be defined flexibly. Conventional methods for hyperspectral image classification utilize all spectral bands and feed the raw intensity values directly into the CRF feature functions. These methods, however, require significant computational effort and yield an ambiguous summary of the data. To mitigate these problems, we propose a novel processing method for hyperspectral image classification that incorporates a lower-dimensional representation into the CRF. In this paper, we use representations based on three graph-based dimensionality reduction algorithms: Laplacian Eigenmaps (LE), Spatial-Spectral Schroedinger Eigenmaps (SSSE), and Locally Linear Embedding (LLE), and we investigate the impact of the choice of representation on the subsequent CRF-based classification.
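    The dimensionality reduction step can be illustrated with scikit-learn's manifold module: SpectralEmbedding implements Laplacian Eigenmaps and LocallyLinearEmbedding implements LLE. The random hyperspectral cube, the target dimensionality, and the neighborhood size below are placeholders, and no off-the-shelf SSSE or CRF implementation is assumed; the resulting embeddings would stand in for raw band intensities in the CRF feature functions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding, SpectralEmbedding

# Hypothetical hyperspectral cube reshaped to (n_pixels, n_bands).
rng = np.random.default_rng(0)
cube = rng.random((32, 32, 200))
X = cube.reshape(-1, cube.shape[-1])

# Laplacian Eigenmaps (SpectralEmbedding) and LLE each produce a low-dimensional
# per-pixel representation to replace the raw spectral bands.
le_features = SpectralEmbedding(n_components=10, n_neighbors=15).fit_transform(X)
lle_features = LocallyLinearEmbedding(n_components=10, n_neighbors=15).fit_transform(X)

# SSSE would add a spatial-spectral potential term to the graph Laplacian;
# it is not sketched here.
```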